The New Silicon Alliance

OpenAI's Pact with AMD Signals a Shift in the AI Arms Race

Article created and last updated on: Monday 06 October 2025 13:16

Abstract

A landmark, multi-billion dollar agreement has been forged between the artificial intelligence research laboratory OpenAI and the semiconductor designer Advanced Micro Devices (AMD), signalling a significant strategic realignment in the global technology landscape. Under the terms of the partnership, AMD will supply OpenAI with vast quantities of its high-performance graphics processing units (GPUs), amounting to six gigawatts of computing power, to build out its next-generation AI infrastructure. This move represents a direct challenge to the market dominance of Nvidia, the current leading supplier of AI chips, and reflects a broader industry trend of de-risking supply chains for the critical components that power the artificial intelligence revolution. The deal unfolds against a backdrop of global economic volatility, where diverse sectors are grappling with their own unique pressures. In the United Kingdom, luxury carmaker Aston Martin has issued a profit warning, citing the dual threats of potential United States tariffs and persistent supply chain disruptions. Meanwhile, in Japan, the beverage giant Asahi has only recently been able to restart beer production after a debilitating cyber-attack crippled its operations, highlighting the increasing vulnerability of manufacturing to digital threats. These disparate events underscore the interconnected yet fragile nature of the modern global economy, where strategic technological partnerships, geopolitical trade tensions, and cybersecurity have become paramount concerns.

Introduction

The relentless advancement of artificial intelligence, particularly in the domain of large language models and generative AI, has precipitated an insatiable demand for computational power. At the heart of this technological surge lies a specialised class of semiconductor: the graphics processing unit, or GPU. These powerful chips, originally designed for rendering complex video game graphics, have proven exceptionally adept at the parallel processing tasks required to train and operate sophisticated AI systems. A landmark agreement between OpenAI, the creator of ChatGPT, and the chipmaker AMD, announced on 6 October 2025, has sent shockwaves through the technology sector.

The multi-year, multi-billion dollar partnership will see AMD supply OpenAI with GPUs equivalent to six gigawatts of computing power, a massive-scale deployment intended to form the bedrock of OpenAI's future AI infrastructure. The first phase of this collaboration is scheduled to commence in the second half of 2026, utilising AMD's forthcoming MI450 series of chips. This strategic alliance is deeply significant for several reasons. It marks a concerted effort by OpenAI to diversify its hardware supply chain, thereby reducing its heavy reliance on the current market leader, Nvidia. For AMD, securing a partnership with the world's most prominent AI company serves as a powerful endorsement of its technology and presents a formidable challenge to Nvidia's long-held supremacy.

This pivotal development in the high-stakes world of AI hardware does not occur in a vacuum. It is set against a complex and often turbulent global economic environment. In the United Kingdom's automotive sector, luxury brand Aston Martin faces significant headwinds, issuing a profit warning due to concerns over potential US trade tariffs and ongoing supply chain difficulties. This situation contrasts with the fortunes of its domestic rival, Jaguar Land Rover, which has reported strong profits. In another corner of the global economy, Japanese brewer Asahi has been grappling with the fallout from a severe ransomware attack that halted production of its popular beers, forcing the company into a manual, paper-based ordering system before recently resuming operations at its breweries. These events, though sector-specific, illustrate the pervasive challenges of geopolitical friction, supply chain fragility, and cybersecurity threats that define the contemporary corporate landscape.

The Unquenchable Thirst for Computation: The Rise of AI Superstructures

The emergence of large-scale artificial intelligence models, such as OpenAI's GPT series, has fundamentally altered the landscape of computing. These models are not merely sophisticated software; they are vast, intricate networks of digital neurons that require an extraordinary amount of computational power to both create and operate. The process is typically divided into two distinct phases: training and inference.

Training is the computationally intensive process of teaching an AI model. It involves feeding the model colossal datasets—in the case of language models, this can be a significant portion of the public internet—and repeatedly adjusting the connections between its artificial neurons to improve its performance on specific tasks, such as generating human-like text or writing computer code. This process is akin to building a skyscraper; it is a monumental, one-off effort that requires immense resources. The cost of training a state-of-the-art model can run into the hundreds of millions, or even billions, of pounds, driven primarily by the sheer energy and processing time consumed by thousands of specialised chips working in concert for weeks or months on end.
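
The scale of these figures can be sanity-checked with a rule of thumb widely used in the scaling-law literature, which approximates training compute as roughly six floating-point operations per model parameter per training token. The short Python sketch below applies that rule; every input is an assumption chosen purely for illustration rather than a figure disclosed by OpenAI.

    # Illustrative back-of-envelope estimate of training compute and cost.
    # All figures are assumptions for the sake of the example, not disclosed OpenAI numbers.

    params = 1.8e12          # assumed model size: 1.8 trillion parameters
    tokens = 13e12           # assumed training set: 13 trillion tokens
    flops_per_token = 6      # rule of thumb: ~6 FLOPs per parameter per training token

    total_flops = flops_per_token * params * tokens            # ~1.4e26 FLOPs

    gpu_flops = 1.0e15       # assumed sustained throughput per GPU (1 PFLOP/s, low precision)
    gpu_count = 25_000       # assumed cluster size
    seconds = total_flops / (gpu_flops * gpu_count)
    print(f"Training time: ~{seconds / 86_400:.0f} days on {gpu_count:,} GPUs")

    cost_per_gpu_hour = 2.5  # assumed all-in cost in US dollars per GPU-hour
    cost = seconds / 3600 * gpu_count * cost_per_gpu_hour
    print(f"Compute cost: ~${cost / 1e6:.0f} million")
    # ~$98 million for the final run alone in this example; real programmes cost more
    # once failed and experimental runs are included.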

Inference, by contrast, is the process of using the trained model to perform a task, such as answering a query or generating an image. While a single inference task is far less demanding than the training process, the cumulative computational cost can be enormous when a service like ChatGPT is used by hundreds of millions of people. This phase is more analogous to the day-to-day running of the completed skyscraper; it requires a constant supply of energy and maintenance to serve its inhabitants.
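
A similarly rough sketch shows how quickly inference costs accumulate at consumer scale. Again, every number below is an assumption chosen for illustration only.

    # Illustrative estimate of cumulative inference load; every input is an assumption.
    queries_per_day = 1e9        # assumed daily requests across a very large service
    tokens_per_query = 1_000     # assumed combined prompt and response length
    params = 1.8e12              # assumed model size, matching the training sketch above

    # Rough rule of thumb: ~2 FLOPs per parameter per generated token.
    daily_flops = 2 * params * tokens_per_query * queries_per_day
    print(f"Daily inference compute: ~{daily_flops:.1e} FLOPs")   # ~3.6e24 per day

    # At that rate, cumulative inference compute overtakes the one-off training
    # estimate above (~1.4e26 FLOPs) after roughly forty days.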

The hardware that underpins both of these phases is the AI accelerator, a category of microprocessor dominated by the GPU. Originally developed to accelerate the rendering of 3D graphics in video games, GPUs possess an architecture that is highly parallelised. This means they are designed to perform many thousands of relatively simple calculations simultaneously. This characteristic makes them exceptionally well-suited to the mathematical operations, primarily matrix multiplications, that are the building blocks of neural networks.
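
The operation at the heart of that workload is ordinary matrix multiplication. The brief sketch below expresses one dense neural-network layer with NumPy; a GPU framework executes exactly this arithmetic, but spreads it across thousands of cores at once, which is why the parallel architecture matters.

    # Minimal sketch of the matrix multiplication at the core of a neural-network layer.
    # NumPy runs this on the CPU; GPU frameworks execute the same operation across
    # thousands of cores in parallel.
    import numpy as np

    batch = np.random.randn(64, 4096).astype(np.float32)      # 64 inputs, 4096 features each
    weights = np.random.randn(4096, 4096).astype(np.float32)  # one dense layer's parameters
    bias = np.zeros(4096, dtype=np.float32)

    activations = np.maximum(batch @ weights + bias, 0.0)     # matmul, bias, ReLU
    print(activations.shape)                                  # (64, 4096)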

The infrastructure required to house and operate these accelerators is on a scale previously unseen in the technology industry. It involves the construction of massive data centres, often referred to as 'AI factories', which are vast buildings filled with racks of servers, each containing multiple GPUs. These facilities require sophisticated cooling systems to dissipate the immense heat generated by the chips and a power supply equivalent to that of a small city. OpenAI's ambition, under a massive infrastructure initiative codenamed 'Stargate', is to build out a capacity of 10 gigawatts, a project with an estimated cost of $500 billion. The partnership with AMD, committing to six gigawatts of its chips, is a cornerstone of this monumental buildout.
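
To give a sense of scale, the committed capacity can be translated into an approximate accelerator count, assuming a per-device power draw and a multiplier for the host servers, networking, and cooling around each chip. The per-device figures below are assumptions for illustration, not numbers from either company.

    # Rough illustration of what 'six gigawatts' of AI capacity could mean in hardware terms.
    site_power_watts = 6e9       # the six gigawatts committed in the AMD deal
    watts_per_gpu = 750          # assumed accelerator draw (the MI300X is rated around 750 W)
    facility_overhead = 2.0      # assumed multiplier for host CPUs, networking and cooling

    gpus = site_power_watts / (watts_per_gpu * facility_overhead)
    print(f"~{gpus / 1e6:.1f} million accelerators")           # ~4 million in this example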

The Reign of Nvidia: A Kingdom Built on CUDA

For over a decade, the market for AI accelerators has been overwhelmingly dominated by one company: Nvidia. The Silicon Valley firm, once known primarily to computer gaming enthusiasts, astutely recognised the potential of its GPUs for general-purpose computing and has since established a near-monopoly in the AI hardware space, with market share estimates ranging from 70% to as high as 95%. This dominance is not merely a result of superior hardware, but is deeply entrenched through a powerful and mature software ecosystem.

Nvidia's success is built upon a platform called CUDA, which stands for Compute Unified Device Architecture. Introduced in 2007, CUDA is a parallel computing platform and programming model that allows developers to use a variant of the C programming language to utilise the massive parallel processing power of Nvidia GPUs for a wide range of tasks beyond graphics. Over the years, Nvidia has invested heavily in building out a comprehensive suite of libraries, tools, and frameworks optimised for CUDA, particularly for scientific computing and, most crucially, for deep learning.
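
CUDA itself is programmed in a dialect of C and C++, but its core idea, a small 'kernel' function that thousands of GPU threads execute simultaneously, can be sketched from Python using the Numba bindings. The example below is illustrative only and assumes an Nvidia GPU with the CUDA driver plus the numba and numpy packages installed.

    # Sketch of the CUDA programming model via Numba's Python bindings.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_kernel(vec, factor):
        i = cuda.grid(1)             # each GPU thread gets a unique global index
        if i < vec.shape[0]:
            vec[i] *= factor         # thousands of threads run this line simultaneously

    data = np.arange(1_000_000, dtype=np.float32)
    d_data = cuda.to_device(data)                    # copy to GPU memory
    threads_per_block = 256
    blocks = (data.size + threads_per_block - 1) // threads_per_block
    scale_kernel[blocks, threads_per_block](d_data, 2.0)
    print(d_data.copy_to_host()[:3])                 # [0. 2. 4.]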

This software ecosystem has created a formidable competitive advantage, often referred to as a 'moat'. Researchers and developers in the AI field have spent years building their models and workflows using Nvidia's software tools. Switching to a competitor's hardware would often necessitate a painful and time-consuming process of rewriting code and adapting to a new, and typically less mature, software environment. This has created immense customer lock-in and has made it exceedingly difficult for rivals to gain a foothold, even if they could produce competitive hardware.

The company's recent product lines, such as the 'Hopper' architecture-based H100 GPU and its successor, the 'Blackwell' platform, have become the industry standard for training and deploying large-scale AI models. The demand for these chips has been so intense that it has frequently outstripped supply, allowing Nvidia to command premium prices and achieve extraordinary profit margins. This has propelled Nvidia to become one of the world's most valuable companies.

However, this very dominance has created a strategic vulnerability for its customers. Major technology companies and AI labs, including OpenAI, have become almost entirely dependent on a single supplier for the most critical component of their infrastructure. This reliance creates risks related to supply chain disruptions, pricing power, and a potential slowing of innovation due to a lack of competition. It is this strategic imperative to mitigate dependency that has driven major players in the AI field to actively seek and cultivate viable alternatives, creating the opportunity that AMD is now seizing.

A Challenger Emerges: The AMD Offensive

Advanced Micro Devices, or AMD, has long been a significant player in the semiconductor industry, primarily known for its decades-long rivalry with Intel in the market for central processing units (CPUs) and its competition with Nvidia in the consumer graphics card market. Under the leadership of its chief executive, Dr. Lisa Su, AMD has undergone a remarkable transformation, re-establishing itself as a technological leader in high-performance computing. The company is now mounting its most serious challenge yet to Nvidia's AI dominance.

The cornerstone of AMD's strategy is its 'Instinct' line of data centre GPUs. These accelerators are designed specifically for the demands of high-performance computing (HPC) and artificial intelligence workloads. The flagship MI300X, built on the company's CDNA 3 architecture, represents a significant leap in performance and efficiency and is the chip that first captured the attention of OpenAI and the broader AI industry, while the forthcoming MI450 series is slated to anchor the first phase of the OpenAI deployment from 2026.

The MI300X is an architectural marvel, employing an advanced 'chiplet' design. Rather than manufacturing a single, large monolithic processor, which can be prone to defects and lower yields, the MI300X is constructed by integrating multiple smaller, specialised chiplets onto a single package. This approach allows AMD to combine different manufacturing processes and optimise each part of the accelerator for its specific function. The MI300X features eight accelerator chiplets, delivering a total of 304 compute units.

One of the key technical advantages that AMD has engineered into the MI300X is its memory capacity and bandwidth. The accelerator is equipped with 192 gigabytes of HBM3 (High Bandwidth Memory), significantly more than the 80 gigabytes offered by its direct Nvidia competitor, the H100. This larger memory capacity is particularly beneficial for inference and for running very large AI models, as it allows more of the model's parameters to be held in the accelerator's local memory, reducing the need to access slower system memory and thereby improving performance. The MI300X boasts a peak memory bandwidth of 5.3 terabytes per second.
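
A little arithmetic illustrates why that capacity matters. Holding the weights of a 70-billion-parameter model in 16-bit precision requires around 140 gigabytes before any working memory is counted; the model size in the sketch below is an illustrative assumption rather than a vendor benchmark.

    # Why on-package memory capacity matters for inference: the weights of a large
    # model must fit somewhere fast. Figures are illustrative, not vendor benchmarks.
    params = 70e9                 # an assumed 70-billion-parameter model
    bytes_per_param = 2           # 16-bit (FP16/BF16) weights

    weight_gb = params * bytes_per_param / 1e9
    print(f"Weights alone: ~{weight_gb:.0f} GB")     # ~140 GB

    mi300x_hbm_gb = 192           # AMD Instinct MI300X
    h100_hbm_gb = 80              # Nvidia H100 (SXM)
    print(weight_gb <= mi300x_hbm_gb)                # True: fits on a single MI300X
    print(weight_gb <= h100_hbm_gb)                  # False: must be split across H100s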

To connect multiple GPUs so they can work together on a single, massive task (a necessity for training large models), AMD has developed its own high-speed interconnect technology called Infinity Fabric. In an eight-GPU platform based on the MI300X, this fabric gives each accelerator up to 896 gigabytes per second of peak aggregate bidirectional bandwidth to its peers.

Despite these impressive hardware specifications, the greatest challenge for AMD remains its software ecosystem. The company's alternative to Nvidia's CUDA is called ROCm (Radeon Open Compute platform). While ROCm is an open-source platform, which is attractive to some developers, it has historically lagged behind CUDA in terms of maturity, feature set, and the breadth of support from the AI community. Overcoming this software gap is critical for AMD's long-term success. A large-scale commitment from a major player like OpenAI will provide a powerful incentive for developers to invest time and resources into optimising their models for the ROCm platform, potentially creating a virtuous cycle that accelerates its adoption.
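
Part of the reason the gap is considered closable is that most AI development now happens through high-level frameworks rather than against CUDA or ROCm directly. PyTorch's ROCm builds, for example, reuse the familiar torch.cuda interface, so device-agnostic model code of the kind sketched below is intended to run unchanged on either vendor's hardware; the remaining gap lies in lower-level kernels, custom operations, and performance tuning. The snippet assumes a PyTorch installation built for the GPU in question.

    # Sketch of framework-level portability: the same code targets an Nvidia GPU
    # under a CUDA build or an AMD GPU under a ROCm build of PyTorch.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(4096, 4096).to(device)   # one dense layer as a stand-in
    batch = torch.randn(64, 4096, device=device)
    output = model(batch)
    print(output.shape, device)                      # torch.Size([64, 4096]) cuda/cpu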

The OpenAI-AMD Nexus: A Strategic Imperative

The partnership between OpenAI and AMD is a multi-faceted agreement born of mutual strategic necessity. The deal, announced as a commitment to deploy six gigawatts of AMD's GPUs, is expected to generate tens of billions of dollars in revenue for the chipmaker over its multi-year duration. As part of the arrangement, OpenAI has also been issued with warrants that could allow it to acquire up to 160 million shares in AMD, representing a potential stake of approximately 10% of the company, contingent on meeting specific deployment milestones.
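
The approximately 10% figure follows from simple arithmetic against AMD's share count, as the rough check below shows; the shares-outstanding number is an assumption, since the exact count fluctuates over time.

    # Quick check of the ~10% figure; shares outstanding is an approximation that
    # changes over time, so treat the result as a rough illustration.
    warrant_shares = 160e6            # warrants issued to OpenAI, per the announcement
    amd_shares_outstanding = 1.62e9   # assumed share count at the time of the deal

    print(f"Potential stake: ~{warrant_shares / amd_shares_outstanding:.1%}")   # ~9.9%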

For OpenAI, the motivations are clear and compelling. The primary driver is the de-risking of its hardware supply chain. The extreme concentration of the AI accelerator market in the hands of a single supplier, Nvidia, presents an unacceptable level of risk for a company whose entire future depends on access to vast amounts of computational power. By cultivating AMD as a second, viable, high-volume supplier, OpenAI not only secures an alternative source of chips but also introduces a powerful competitive dynamic into the market. This competition is expected to exert downward pressure on prices, potentially saving OpenAI billions of dollars in the long run as it builds out its ambitious infrastructure.

Furthermore, the technical specifications of AMD's MI300X accelerator, particularly its large on-package memory capacity, may offer specific performance advantages for OpenAI's workloads, especially in the realm of inference. OpenAI's chief executive, Sam Altman, has highlighted the immense difficulty in securing the necessary compute, stating that the partnership is a major step in building the capacity needed to realise AI's full potential.

From AMD's perspective, the deal is a transformative victory. Securing a public, large-scale commitment from the most recognised and influential AI company in the world provides an unparalleled validation of its Instinct GPU platform. It instantly elevates AMD's status from a peripheral player to a credible top-tier supplier in the lucrative AI data centre market. This "design win" is about more than just the direct revenue; it is a powerful marketing tool that will encourage other cloud providers and AI companies to seriously evaluate and adopt AMD's solutions.

The partnership also provides a critical boost to AMD's ROCm software ecosystem. With OpenAI now committed to deploying AMD hardware at scale, there is a strong business imperative for the AI lab's engineers to work closely with AMD to optimise ROCm for their models. This collaboration will likely lead to significant improvements in the software's performance, stability, and feature set, benefiting the entire community of potential AMD customers. As stated by AMD's CEO, Dr. Lisa Su, the partnership brings together the best of both companies to enable the world's most ambitious AI buildout and advance the entire AI ecosystem. The involvement of Microsoft, which is a key partner to OpenAI and a major customer of both Nvidia and AMD for its Azure cloud platform, adds another layer of complexity and significance to this evolving landscape.

A Volatile Global Market: Corporate Fortunes in Flux

The strategic manoeuvres in the high-technology sector are unfolding within a broader global economic context characterised by uncertainty and sector-specific challenges. While the digital frontier of artificial intelligence is defined by exponential growth and intense competition for computational supremacy, more traditional industries are navigating a landscape shaped by geopolitical tensions, supply chain vulnerabilities, and the ever-present threat of cyber-attacks. The recent experiences of major British and Japanese corporations provide a stark illustration of these varied pressures.

In the United Kingdom's prestigious luxury automotive sector, fortunes have diverged. Aston Martin has been compelled to issue a profit warning, signalling to investors that its future earnings are likely to be lower than previously anticipated. The company has identified two primary sources of concern: the persistent disruption to global supply chains, which affects the availability of components and the efficiency of production, and the looming threat of renewed trade tariffs from the United States. The US is a critical market for UK car exports, accounting for just under 17% of the total in 2024. The potential imposition of a 25% tariff on imported vehicles could significantly increase the price of British-made cars for American consumers, thereby depressing demand and impacting profitability.

This challenging outlook for Aston Martin contrasts sharply with the recent performance of Jaguar Land Rover (JLR). The company reported a pre-tax profit of £2.5 billion for the financial year ending 31 March 2025, its best performance in a decade. This success was driven by record wholesale volumes of its popular Defender model and strong sales of the Range Rover Sport. JLR's robust performance highlights how different companies within the same sector can experience vastly different outcomes based on their product mix, market positioning, and operational resilience.

Meanwhile, an entirely different form of disruption has afflicted one of Japan's most iconic brands. Asahi Group Holdings, the brewer of Asahi Super Dry beer, fell victim to a major cyber-attack in late September and early October 2025. The incident, identified as a ransomware attack, crippled the company's computer systems, halting order processing, shipments, and production at most of its approximately 30 factories across Japan. The attack forced Asahi to revert to manual order-taking using paper and fax machines and led to widespread shortages of its products in restaurants and convenience stores. The company confirmed on 6 October that it had resumed production at its six main beer plants, but the incident has served as a potent reminder of the vulnerability of modern, highly-connected manufacturing operations to digital threats.

The Digital Ghost in the Machine: Asahi's Cyber Onslaught

The cyber-attack that struck Asahi Group Holdings in late September 2025 serves as a critical case study in the operational vulnerabilities facing the modern manufacturing sector. The incident began on 29 September, when a system outage was first detected; it escalated quickly, crippling the company's core business functions. On 3 October, the company confirmed that its servers had been infected with ransomware, a malicious type of software that encrypts a victim's data, rendering it inaccessible until a ransom is paid.

The impact was immediate and severe. The ransomware paralysed Asahi's sophisticated order and distribution systems, which in turn forced a halt to production at the majority of its Japanese factories. Shipments of key products, including the globally recognised Asahi Super Dry beer, were interrupted, leading to a ripple effect throughout Japan's extensive retail and hospitality industries. Convenience store chains reported empty shelves, and restaurant and bar owners scrambled to find alternative supplies. The disruption was so profound that the company was forced to postpone the launch of ten new products.

In response to the crisis, Asahi was compelled to regress technologically, resorting to taking orders manually via paper and fax machines to maintain a limited flow of products. An emergency task force was established to work with external cybersecurity experts to investigate the breach and restore the affected systems. The company also noted that there were traces of a possible data leak, although the full extent of any exfiltrated information remains under investigation.

By Monday, 6 October, Asahi announced that it had managed to resume production at its six main domestic breweries. However, the company stated that it did not yet have a clear timeline for the full restoration of its computer systems, and a complete resumption of output across all its approximately 30 affected factories was not expected in the immediate future.

This event underscores a growing and perilous trend. The manufacturing industry's increasing reliance on digital technologies and interconnected systems—often referred to as Industry 4.0—has created tremendous efficiencies but has also dramatically expanded the potential 'attack surface' for malicious actors. Industrial Control Systems (ICS), which manage physical processes on the factory floor, are increasingly being targeted. Ransomware attacks on such systems are a significant concern, as the disruption to production can lead to catastrophic financial losses, supply chain chaos, and severe reputational damage. The Asahi incident is a stark warning that in an era of digital transformation, cybersecurity can no longer be considered a peripheral IT issue but must be treated as a core operational and strategic priority for all manufacturing enterprises.

Market Pulse: A Snapshot of British Business

Beyond the headline-grabbing events in the technology and automotive sectors, other indicators offer a broader perspective on the health of the British economy. The leisure travel industry, in particular, has shown remarkable resilience and growth, as demonstrated by the performance of Jet2 plc.

For the financial year ending 31 March 2025, Jet2 reported record-breaking results. The company's revenue increased by 15% to £7.17 billion, while its profit before tax rose by 12% to £593.2 million. This strong financial performance was driven by a significant increase in customer numbers, with flown passenger numbers growing by 12% to 19.77 million.

Jet2's strategy has focused on expanding its operational footprint within the United Kingdom. The company successfully launched new bases at Bournemouth and London Luton airports, extending its reach so that approximately 85% of the UK population now lives within a 90-minute drive of one of its thirteen airport bases. This expansion reflects a sustained and robust demand for leisure travel among British consumers. In a sign of confidence in its financial position and future prospects, the company also announced its intention to launch a share buyback programme of up to £250 million and increased its final dividend to shareholders.

The performance of companies like Jet2 provides a valuable counterpoint to the challenges faced in other sectors. While manufacturers grapple with supply chains and geopolitical risks, the strong results in the travel sector suggest that consumer spending on services and experiences remains a powerful engine of economic activity. This mixed picture reflects the complex, multi-speed nature of the modern economy, where different industries respond to distinct opportunities and pressures.

Conclusion

The confluence of events in early October 2025 paints a vivid picture of a global economy at a crossroads, defined by both groundbreaking technological acceleration and persistent real-world vulnerabilities. The landmark partnership between OpenAI and AMD is more than a mere corporate supply agreement; it represents a fundamental shift in the technological arms race for dominance in artificial intelligence. By challenging the entrenched incumbency of Nvidia, this new alliance promises to foster a more competitive and dynamic market for the essential hardware that will power future innovations. This development will have far-reaching consequences, potentially accelerating the pace of AI development, altering the economics of computation, and reshaping strategic dependencies across the entire technology industry.

Simultaneously, the challenges confronting companies like Aston Martin and Asahi serve as a crucial reminder that the digital and physical economies are inextricably linked. The profit warning from the British carmaker, driven by the spectres of trade protectionism and logistical friction, highlights the enduring influence of geopolitics and the fragility of global supply chains. The successful cyber-attack on the Japanese brewer demonstrates with alarming clarity the profound operational risks that accompany digital transformation, proving that even the most established manufacturing processes can be brought to a standstill by intangible threats.

These narratives, though originating in disparate sectors and geographies, converge on a central theme of interdependence and risk in the 21st-century economy. The future will be shaped not only by the architects of artificial intelligence and the designers of silicon chips but also by the ability of all industries to navigate a complex web of international trade relations, secure their digital infrastructure, and build resilience into their physical operations. The new silicon alliance between OpenAI and AMD marks the opening of a new front in the AI revolution, but the concurrent struggles in manufacturing and automotive underscore the multifaceted and unpredictable nature of the global landscape in which this revolution is unfolding.
